.device xgp
.!XGPCOMMANDS ← "/TMAR=200/PMAR=1900/BMAR=100"
.REQUIRE "BASKER.PUB[SUB,SYS]" SOURCE FILE; turn on "%,π,α,#,↓_,→"
.FONT 4 "SUB"
.FONT 5 "SUP"
.FONT 6 "FORN25"
.FONT 7 "FIX25"
.FONT 8 "math30"
.FONT 9 "PLUNK"
.FONT A "math50"
.SPACING 10*5 MILLS;
.PAGE FRAME 45 HIGH 78 WIDE
.TITLE AREA HEADING LINES 1 TO 2
.every heading (%3COMPUTER SCIENCE COMPREHENSIVE,,SOLUTIONS%*)
.AREA TEXT LINES 3 TO 43
.SELECT 1
.TITLE AREA FOOTING LINES 45
.EVERY FOOTING (,{PAGE},)
.COUNT PAGE from 2;
.AT "[" TXT "]"; ⊂("%2TXT%*")⊃;
.at "⊗⊗" txt "⊗"; ⊂("%3txt%*")⊃;
.at "≤" ⊂"%7α≤%*"⊃
.at "≥" ⊂"%7α≥%*"⊃
.at "↓" txt "↑" ⊂"%4txt%*"⊃
.at "↑" txt "↓" ⊂"%5txt%*"⊃
.at "∀" ⊂"%8α∀%*"⊃
.at "!∞" ⊂"%7α∞%*"⊃
.at "!cd" ⊂"%7π.%*"⊃
.at "!'" ⊂"%8α'%*"⊃
.at "¬" ⊂"%6α¬%*"⊃
.at "ε" ⊂"%7αε%*"⊃
.at "!≤" ⊂"%8α≤%*"⊃
.at "<=>" ⊂"%7α<α=α>%*"⊃
.at "!∂" ⊂"%8α∂%*"⊃
.at "=>" ⊂"%7α=α>%*"⊃
.at "≠" ⊂"%7α≠%*"⊃
.at "|" ⊂"%6α|%*"⊃
.at "!K" ⊂"%8αK%*"⊃
.at "!5" ⊂"%8α5%*"⊃
.at "!$" ⊂"%Aα$%*"⊃
.at "!(" ⊂"%Aα(%*"⊃
.at "!)" ⊂"%Aα)%*"⊃
.macro bi; ⊂
.indent 8,12,8;
.turn on "\"; tabs 13;
. ⊃
.macro bd; ⊂
.group
.crbreak; nojust;
.preface 0; turn on "\";
. ⊃
.PLACE TEXT
.begin center
SPRING 1976 COMPUTER SCIENCE
COMPREHENSIVE EXAM
SOLUTIONS
.END
.skip 10
.begin bd
.tabs 8,35
THEORY OF COMPUTATION\page 2
.skip
NUMERICAL ANALYSIS\page 3
.skip
SYSTEMS\page 9
.skip
DATA STRUCTURES\page 12
.skip
HARDWARE\page 15
.skip
ARTIFICIAL INTELLIGENCE\page 17
.end
.next page
.skip 2
.once center
⊗⊗Theory of Computation⊗
.skip
(1) Regular Sets
.break
(a) (i) The set is not regular. Its intersection with the regular set
0↑*↓10↑*↓ is {0↑n↓10↑n↓|n≥0}, so by Theorem 3.6 in Hopcroft and Ullman it is
sufficient to show that {0↑n↓10↑n↓|n≥0} is not regular.
Suppose that a finite automaton accepting {0↑n↓10↑n↓|n≥0} has [k] states. In scanning
the first [k]+1 symbols of 0↑k+1↓ 1 0↑k+1↓, some state is visited twice; hence the
(non-empty) string scanned between visits can be removed from the input without
affecting the input's acceptance. This shows that the finite automaton accepts
a string which is not of the form 0↑n↓ 1 0↑n↓.
(ii) The set is regular. Taking [x]=1 we find that the set contains 1(0+1)↑*↓1.
But clearly it can contain no more than this, since all strings in the set begin
and end in 1. So the set [is] 1(0+1)↑*↓1.
.skip
(b) By Theorem 3.6 in Hopcroft and Ullman, R↓1↑ ∩ ¬R↓2↑ (the intersection of
R↓1↑ with the complement of R↓2↑) is a regular set, and
this set is empty if and only if R↓1↑ !K R↓2↑. It is significant that a
constructive proof of this theorem is given; by Church's thesis we assert
that a Turing machine can be constructed which reads M↓1↑ and M↓2↑ and produces
M↓3↑ accepting R↓1↑ ∩ ¬R↓2↑. Now if M↓3↑ has [k] states we simply run M↓3↑ on all
inputs of length < [k] and see whether or not any input is accepted. By
Theorem 3.11, R↓1↑ ∩ ¬R↓2↑ is empty iff no input of length < [k] is accepted by M↓3↑.
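This emptiness test is easy to sketch in a modern language. Below is a minimal
Python illustration (not part of the original solutions; the encoding of the
automaton as a transition dictionary is an assumption made for the example):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
# Sketch: emptiness test for a k-state DFA by trying every input of
# length < k, as licensed by Theorem 3.11.
from itertools import product

def is_empty(states, alphabet, delta, start, accepting):
    k = len(states)
    for length in range(k):                    # strings of length 0 .. k-1
        for w in product(alphabet, repeat=length):
            s = start
            for c in w:
                s = delta[(s, c)]
            if s in accepting:
                return False                   # some short string is accepted
    return True

# Example: a DFA over {0,1} accepting the strings that contain a 1.
delta = {("A","0"): "A", ("A","1"): "B", ("B","0"): "B", ("B","1"): "B"}
print(is_empty({"A","B"}, "01", delta, "A", {"B"}))    # False
.end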
.skip
(2) Bad Function
.break
(a) If [term] always terminated, it would be a decision procedure for termination
of LISP computations. From this would come a solution of the halting problem
for Turing machines, which is known to be unsolvable; but part (b) provides a direct
proof, more explicit than the proofs given for Turing machines.
(b) One solution is
.begin nofill
[bad]α[[u]α] ← [subst]α[[u], U, [quine]α[(LABEL FOO
(LAMBDA(X)(COND((U(QUINE X))(FOO X))(T T))))α]α]
.end
It is obtained by constructing from [term] a new LISP function FOO that tells
whether a LISP function terminates when it looks at its own S-expression representation,
and gives its answer by not terminating when the answer
is yes and by returning T otherwise. [bad]α[[term]*α] is the
S-expression representation of [foo]α[[foo]*α], and the fact that it doesn't terminate
follows from its construction, and also from carrying out the indicated
computation through the recursion.
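The same diagonal argument can be sketched in modern Python (a hypothetical
illustration only, not part of the original solutions; no total [term] can
actually exist):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def term(f, x):
    ...  # assumed decision procedure: True iff f(x) terminates

def bad(f):
    # Diagonal case: loop forever exactly when term claims f halts on f.
    if term(f, f):
        while True:
            pass
    return True

# bad(bad) terminates iff term(bad, bad) is False, i.e. iff term says
# bad(bad) does not terminate -- so no correct total term can exist.
.end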
.next page
.next page
.next page
.next page
.next page
.next page
.next page
.skip
.once center
⊗⊗Systems⊗
.skip
(1) Storage Allocation
1.a. Run time stack, since the space needed depends on the depth of recursion, and
bindings follow stack discipline.
b. Run time stack, for the same reason. The fact that binding is on block entry
rather than procedure call does not change the needs.
c. Run time general, since there is no way of knowing ahead of time how many
records of what type will be created, and records become free when there are no
references to them, which does not follow scoping discipline.
d. Run time stack, same as a.
e. Run time general, same as c.
f. Run time general, since there needs to be a way to allocate new atom headers
when the READ is executed, and they are not freed up when the function or
block is exited.
g. Compile time. If it's in FORTRAN, it must be.
h. Run time general, since processes are created and destroyed independently of
the normal stack discipline. This can be integrated into a stack, as with
INTERLISP's spaghetti stack.
i. Compile time. Pointers to pieces of code can be allocated like any other variables.
j. Run time stack, like a.
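The stack/general distinction can be illustrated with a small modern Python
sketch (not part of the original solutions):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def depth(tree):
    # Recursion as in (a): each call needs fresh bindings, and calls
    # return in LIFO order, so activation records fit a run time stack;
    # how deep we go is known only at run time.
    if tree is None:
        return 0
    return 1 + max(depth(tree[0]), depth(tree[1]))

def make_record(x):
    rec = {"value": x}      # a record as in (c): allocated from the
    return rec              # general heap, since it outlives the call
                            # that created it
.end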
.skip 2
(2) Microcode Usage
2.a.
Op-codes for commonly used instructions (like CONS, MEMBER, etc.)
Micro-coded aid for frequent internal operations like variable binding, stack
manipulations, variable referencing, function call (as in the LISP interpreter), etc.
Compact encodings for programs and data, e.g. ByteLisp packs most instructions
into 9 bits, allowing 4 to be stored per word. Schemes for encoding LISP list
structure have been proposed which get a 2-to-1 reduction for typical data.
These compacting strategies could be done without microcode, but the encoding
and decoding expense would be too great.
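The packing arithmetic can be sketched in modern Python (a sketch only; the
9-bit fields and 36-bit word follow the text, not ByteLisp's actual format):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def pack(ops):
    # Four 9-bit instruction fields fit one 36-bit word: 4 * 9 = 36.
    assert len(ops) == 4 and all(0 <= op < 512 for op in ops)
    word = 0
    for op in ops:
        word = (word << 9) | op
    return word

def unpack(word):
    return [(word >> shift) & 0x1FF for shift in (27, 18, 9, 0)]

assert unpack(pack([1, 2, 3, 511])) == [1, 2, 3, 511]
.end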
Virtual storage management. Much more sophisticated schemes can be used
without incurring too much overhead, when address translation is done in microcode.
2.b.
Static measurements of how often different operations are called for in existing
programs. This requires an analyzer which goes through programs gathering
statistics. An example in LISP is finding out how many calls in typical programs
are to functions with no arguments, one argument, two arguments, etc. A result
might be a decision to put in an opcode which was specially designed for calls
to functions of one argument, thereby avoiding some of the overhead associated
with variable numbers of arguments.
Static measurements of how data is actually distributed in typical programs.
For example, if most list structure involves long CDR chains and short CAR
chains, the optimal encoding would be different from what it would be for a
balanced binary tree.
Dynamic measurements of how often different operations are done. This requires
a version of the language which has the capability to gather statistics as it runs.
At the level of function calls, it can be done with a tracing mechanism, but to
handle internal things like stack manipulations and variable lookup, it is necessary
to build in special measurement facilities.
2.c.
If space is the major bottleneck, then the emphasis would be on supporting virtual
memory management and on encoding the programs and data into a more compact
form.
.next page
(3) Memory Mapping
.break
(The machine under discussion is the DEC PDP-11/45.)
a) All we want to do is i) make virtual addresses 0 through BOUND-1 valid, all others invalid,
and ii) map virtual address VA into physical address BASE+VA. This we can do
as long as BOUND and BASE are multiples of 64 bytes (low order 6 bits zero).
To do the simulation, let BOUND = [n] * 8K + [m]. Make MRs 0-[n] resident, upward
expanding and full length (NR="false", ED="up", LF=max='17700), except
that MR [n] has length [m] (LF=[m]-64). Make AF of MR [i] be BASE + [i]*8K. Make all
other MRs nonresident (NR="true").
b) To use the hardware for paging, make MRs 0-7 upward expanding and full length
(ED="up", LF=max='17700), and require that each AF be a multiple of
8K (low 13 bits zero). We get an eight-page address space, each page
of 8K byte length.
c) If we make all MRs upward expanding, we get a segmentation of sorts: there are 8
segments of 8K byte maximum length. We don't have segmentation in the Multics
sense because, for example, there is no provision for control of inter-segment
references (indexing from a pointer in segment [i] can easily produce an
address in segment [j]≠[i]). To be comparable to Multics, the addressing
scheme also needs provision for dynamic linkage of segments by symbolic name.
Segmentation is clearly a bad approach here because of the restriction to
only 8 small segments in a program. Multics allows up to 2↑18↓ segments to
be linked together.
d) The primary reason for having memory mapping on this machine at all is to allow
a large physical memory to be addressed by a machine with a small virtual
address space. Large pages are a problem due to the potential for internal
fragmentation, but having upward- and downward-expandable, variable-length "pages"
may help here. On the other hand, smaller pages would require more overhead to
set up; since the mapping registers are in fixed locations (instead of
being kept in core, pointed to by some register) the overhead of loading and
unloading them can be considerable. Even if page tables are maintained only
in mainstore, the space they take up can be considerable when pages are small.
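The register settings of part (a) can be sketched in modern Python (not part
of the original solutions; the field names NR, ED, LF, AF follow the problem's
notation, but LF is stored here as a byte count rather than in the hardware's
block encoding):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
K8 = 8 * 1024      # bytes spanned by one mapping register

def mr_settings(base, bound):
    assert base % 64 == 0 and bound % 64 == 0   # low order 6 bits zero
    n, m = divmod(bound, K8)
    regs = []
    for i in range(8):
        if i < n or (i == n and m > 0):
            length = K8 if i < n else m
            regs.append(dict(NR=False, ED="up", LF=length, AF=base + i * K8))
        else:
            regs.append(dict(NR=True, ED="up", LF=0, AF=0))
    return regs

def translate(regs, va):
    i, offset = divmod(va, K8)
    r = regs[i]
    if r["NR"] or offset >= r["LF"]:
        raise MemoryError("invalid virtual address")
    return r["AF"] + offset

regs = mr_settings(base=0x40000, bound=3 * K8 + 128)
assert translate(regs, 2 * K8 + 5) == 0x40000 + 2 * K8 + 5
.end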
.next page
.once center
⊗⊗Data Structures⊗
.break
(1) Implementation of Structures
.break
(i) A [stack]. This could be implemented as an array [stack](1::[n]) with a pointer
[top] to the top of the stack. Initially [top]=0.
.begin bd
.tabs 8,12,16
\Insertion is:
\\⊗⊗if⊗ [top]=[n] ⊗⊗then⊗ [overflow] ⊗⊗else begin⊗ [top] ← [top]+1; [stack]([top]) ← item ⊗⊗end⊗
\Deletion is:
\\⊗⊗if⊗ [top]=0 ⊗⊗then⊗ [empty] ⊗⊗else begin⊗ item ← [stack]([top]); [top] ← [top]-1 ⊗⊗end⊗;
.end
Each operation requires O(1) time, average and worst-case. A linked list could also be
used to implement a stack.
.skip 2
(ii) A [queue]. This could be implemented as an array [queue] (1::[n]), with
two pointers [front] and [back]. Initially [front]=1, [back]=1. We imagine
that position 1 follows position [n].
.begin bd
.tabs 8,12,16
\Insertion is:
\\⊗⊗if⊗ ([back] ⊗⊗mod⊗ [n])+1 = [front] ⊗⊗then⊗ [overflow] ⊗⊗else begin⊗
\\\[queue] ([back]) ← item;
\\\[back] ← ([back] ⊗⊗mod⊗ [n])+1
\\⊗⊗end⊗;
\Deletion is
\\⊗⊗if⊗ [back]=[front] ⊗⊗then⊗ [empty] ⊗⊗else begin⊗
\\\item ← [queue] ([front]);
\\\[front] ← ([front] ⊗⊗mod⊗ [n])+1
\\⊗⊗end⊗;
.end
Each operation requires O(1) time, average and worst-case. The queue can store [n]-1
items (at least one array position is left empty so that an empty queue and a full queue
can be distinguished). A linked list could also be used to implement a queue.
.next page
(iii) A [heap]. A heap is a complete binary tree such that the root is the smallest
element and each child of a vertex is at least as large as its parent. We can number
the elements from 1 to [n], such that 1 is the root and 2[x] and 2[x]+1 are the
children of [x]. Then we can store the entire tree in an array.
To insert, we add the new element at position [n]+1 and "sift up", interchanging
along the path from [n]+1 to the root so that the elements on this path are
ordered. Insertion requires O(1) time on the average and O(log [n]) in the worst case.
To delete, we return the element at position 1, put the element at position [n] in
position 1, and "sift down" by comparing the current element to its two children
and exchanging it with the smaller if necessary. Deletion requires O(log [n])
time on the average.
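Both operations can be sketched in modern Python (not part of the original
solutions), with the array 1-indexed as in the text:
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def insert(heap, item):            # heap[0] is unused padding
    heap.append(item)
    i = len(heap) - 1
    while i > 1 and heap[i // 2] > heap[i]:         # "sift up"
        heap[i // 2], heap[i] = heap[i], heap[i // 2]
        i //= 2

def delete_min(heap):
    top, last = heap[1], heap.pop()
    if len(heap) > 1:
        heap[1] = last
        i = 1
        while 2 * i < len(heap):                    # "sift down"
            c = 2 * i
            if c + 1 < len(heap) and heap[c + 1] < heap[c]:
                c += 1                              # smaller child
            if heap[i] <= heap[c]:
                break
            heap[i], heap[c] = heap[c], heap[i]
            i = c
    return top

h = [None]
for x in (5, 3, 8, 1):
    insert(h, x)
assert delete_min(h) == 1 and delete_min(h) == 3
.end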
.skip 2
(iv) A [hash table]. A hash table is an array of list heads plus a hash function.
The hash function, given an arbitrary real number, computes a position in the
array.
To insert an item, we compute its hash function and add the item to that list.
This always requires O(1) time.
.turn off "#"
To delete all items of a given value, we compute its hash function and delete all
items of the desired value from that list. If the hash table is large enough (say
larger than the number of items it contains) and the hash function distributes
the values evenly into the table then deletion requires O(1 + %7#%* items deleted) on the
average.
There are many variations possible on this basic hashing scheme. Another reasonable
answer to this question is a [balanced tree] of some sort (AVL tree, 2-3 tree). For
these structures, the average time per insertion is O(log [n]) and per deletion is
O(log [n] + %7#%* items deleted).
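The basic chaining scheme can be sketched in modern Python (not part of the
original solutions; Python's built-in hash stands in for the hash function on
real numbers):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
class HashTable:
    def __init__(self, size=101):
        self.buckets = [[] for _ in range(size)]   # array of list heads

    def _index(self, x):
        return hash(x) % len(self.buckets)

    def insert(self, x):              # O(1): prepend to one list
        self.buckets[self._index(x)].insert(0, x)

    def delete_all(self, x):          # O(1 + number deleted) on average
        i = self._index(x)
        self.buckets[i] = [y for y in self.buckets[i] if y != x]

t = HashTable()
for v in (3.25, 7.5, 3.25):
    t.insert(v)
t.delete_all(3.25)
.end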
.next page
.turn on "#"
(2) S↓n↑ tree checker
.skip
.begin nofill
(a)
1
2 4
3
.end
.skip
(b) The following ALGOLW solution attempts to mimic the given definition
of S↓n↑ trees as closely as possible. The main difficulty comes in ensuring
that null RSIBLINGS are properly checked for; here this is accomplished by
stopping the recursion at N=1 rather than N=0.
.begin nofill
PROCEDURE CHECK(REFERENCE(NODE) VALUE S; INTEGER VALUE N);
BEGIN
  IF RSIB(S)≠NULL THEN ERROR("Root has brother");
  IF N=0 THEN
    BEGIN IF LCHILD(S)≠NULL THEN ERROR("Illegitimate child of root") END
  ELSE CHECK1(S,LCHILD(S),N)
END CHECK;

PROCEDURE CHECK1(REFERENCE(NODE) VALUE P,C; INTEGER VALUE N);
BEGIN
  IF C=NULL THEN ERROR("Missing child");
  IF KEY(P)>KEY(C) THEN ERROR("Parent larger than child");
  IF N=1 THEN
    BEGIN
      IF LCHILD(C)≠NULL THEN ERROR("Illegitimate child");
      IF RSIB(C)≠NULL THEN ERROR("Unexpected sibling")
    END
  ELSE
    BEGIN
      CHECK1(C,LCHILD(C),N-1);
      CHECK1(P,RSIB(C),N-1)
    END
END CHECK1;
.END
Cleaner solutions are possible by noticing that an S↓n↑ tree is really a root
whose sons, from left to right, are roots of S↓n-1↑, S↓n-2↑, ..., S↓0↑ trees.
A good non-recursive implementation using this idea requires only two cells of stack space
for each level it descends into the tree.
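That cleaner recursive formulation can be sketched in modern Python (not part
of the original solutions; the Node fields mirror the ALGOLW record):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: float
    lchild: "Optional[Node]" = None
    rsib: "Optional[Node]" = None

def check(root, n):
    if root.rsib is not None:
        raise ValueError("root has brother")
    check_sons(root, n)

def check_sons(parent, n):
    # The sons of an S(n) root head S(n-1), ..., S(0) trees, left to right.
    son = parent.lchild
    for k in range(n - 1, -1, -1):
        if son is None:
            raise ValueError("missing child")
        if son.key < parent.key:
            raise ValueError("parent larger than child")
        check_sons(son, k)
        son = son.rsib
    if son is not None:
        raise ValueError("too many children")

leaf = Node(key=5)
check(Node(key=1, lchild=leaf), 1)    # a valid S(1) tree
.end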
.NEXT PAGE
.skip
.once center
⊗⊗Hardware⊗
.break
(1) Shift Registers
.begin nofill
i)  (circuit diagram: Data In → shift register stages → Data Out, clocked by Shift)
ii) (circuit diagram: Data In → shift register stages → Data Out, clocked by Shift)
.end
We must assume that the inverter is sufficiently fast. (In particular, the delay
through an inverter must be less than the delay through a latch, minus the latch's
hold time.)
.next page
(2) Number Representations
.break
i) Just invert Data Out:
.begin nofill
(circuit diagram: Data Out → inverter → 1's compl Data Out)
.end
.break
ii) Transmit all Data Out bits unchanged until a `1' is produced, then invert
the rest:
.begin nofill
(circuit diagram: Data Out and a CARRY flip-flop, clocked by Shift and
initialized by Start 2's compl Out, drive an Xor gate whose output is
2's compl Data Out)
.end
Some signal (Start 2's compl Out above) must be used to define when the
register actually contains the number whose 2's complement is to be shifted out.
There will be glitches in 2's compl Data Out when CARRY is set, due to
the unequal delay through the last shift register stage and CARRY.
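The serial rule the circuit implements can be sketched in modern Python (not
part of the original solutions):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def twos_complement_bits(bits_lsb_first):
    # Pass bits unchanged until the first 1 has been shifted out, then
    # invert the rest; carry remembers whether a 1 has gone by.
    carry = False
    out = []
    for b in bits_lsb_first:
        out.append(b ^ 1 if carry else b)     # the Xor gate
        carry = carry or b == 1
    return out

# 6 = 000110; its 6-bit 2's complement is 111010 = -6 (both LSB first here).
assert twos_complement_bits([0, 1, 1, 0, 0, 0]) == [0, 1, 0, 1, 1, 1]
.end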
.next page
.once center
⊗⊗Artificial Intelligence⊗
.break
(1) Resolution Proof
.break
The negation of the goal statement is
.once indent 8
∀[x] ∃[y] (P([x]) ∧ ¬P([y]))
and this gives the clauses
.once indent 8
P([x]) and ¬P([y]([x]))
to be proved inconsistent. A single resolution does it: after renaming the
second clause's variable to [z], the substitution [x] ← [y]([z]) unifies the
two atoms and yields the empty clause.
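The step can be shown in a toy modern Python sketch (not part of the original
solutions; terms are represented as nested tuples):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def resolve():
    pos = ("P", "x")               # clause 1: P(x)
    neg = ("P", ("y", "z"))        # clause 2: ~P(y(z)), variable renamed apart
    subst = {"x": neg[1]}          # most general unifier: x := y(z)
    assert (pos[0], subst[pos[1]]) == neg   # the literals now clash
    return []                      # resolvent: the empty clause

assert resolve() == []
.end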
.skip 4
(2) Tower of Hanoi
.break
(a) A depth first search avoiding previously visited vertices will take
O(3↑n↓) steps since this is the size of the graph.
(b) A recursive program that generates moving all but one disk to the third
spike as a subgoal, followed by moving that disk and moving the pile back,
will take O(2↑n↓) steps (see the sketch after (d)).
(c) A suitable problem reduction program can discover that the top disk can be
moved anywhere, so that any condition on the position of the top disk can
always be satisfied. This program will reduce the problem to the [n]-1 disk problem and
then to the [n]-2 disk problem, etc. Such a program will establish a
method of moving the disks in O([n]) steps although there are 2↑n↓-1 moves.
(d) To "solve" the problem in 0(1) steps requies that the program
explicitly use mathematical induction or some equivalent mode of reasoning.
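The recursive program of (b) can be sketched in modern Python (not part of the
original solutions):
.begin nofill
.turn off "%,#,[,],_,|,{,}"
def hanoi(n, src, spare, dst, moves):
    # Move n disks from spike src to spike dst, using spare.
    if n == 0:
        return
    hanoi(n - 1, src, dst, spare, moves)   # subgoal: pile of n-1 out of the way
    moves.append((src, dst))               # move the freed bottom disk
    hanoi(n - 1, spare, src, dst, moves)   # move the pile back on top

moves = []
hanoi(3, "A", "B", "C", moves)
assert len(moves) == 2**3 - 1              # 2^n - 1 moves
.end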